Out-of-distribution (OOD) generalization is all about learning invariance against environmental changes. If the context within every class were evenly distributed, OOD would be trivial, because the context could be easily removed thanks to an underlying principle: class is invariant to context. However, collecting such a balanced dataset is impractical. Learning on imbalanced data biases the model towards context and thus hurts OOD. Therefore, the key to OOD is context balance. We argue that the assumption widely adopted in prior work, that the context bias can be directly annotated or estimated from biased class prediction, renders the context incomplete or even incorrect. In contrast, we point out the other side of the above principle: context is also invariant to class, which motivates us to treat the classes (which are already labeled) as varying environments to resolve context bias (without context labels). We implement this idea by minimizing a contrastive loss of intra-class sample similarity while enforcing that this similarity is invariant across all classes. On benchmarks with various context biases and domain gaps, we show that a simple re-weighting based classifier equipped with our context estimation achieves state-of-the-art performance. We provide theoretical justifications in the Appendix and code at https://github.com/simpleshinobu/irmcon.
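A rough sketch of how the objective described above could be instantiated: each labeled class acts as an environment, an intra-class contrastive loss is minimized per class, and its variation across classes is penalized. This is a hedged illustration in PyTorch, not the released IRMCon code; the helper names and the REx-style variance penalty used here in place of the exact invariance constraint are assumptions.

import torch
import torch.nn.functional as F

def supcon_per_class(features, labels, temperature=0.1):
    # Supervised-contrastive loss computed separately for every class: for each
    # anchor, the other samples of its class are positives and the rest of the
    # batch are negatives. Returns one loss value per class ("environment").
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t() / temperature
    self_mask = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    log_prob = torch.log_softmax(sim.masked_fill(self_mask, float('-inf')), dim=1)
    losses = []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if len(idx) < 2:
            continue
        pos = log_prob[idx][:, idx]                     # anchor x positive log-probs
        pos = pos[~torch.eye(len(idx), dtype=torch.bool, device=feats.device)]
        losses.append(-pos.mean())                      # pull in-class samples together
    return torch.stack(losses)

def context_invariance_objective(features, labels, lam=1.0):
    # Classes act as environments: minimize the per-class contrastive losses and
    # penalize their spread across classes (a REx-style surrogate for the
    # cross-class invariance constraint described in the abstract).
    losses = supcon_per_class(features, labels)
    return losses.mean() + lam * losses.var(unbiased=False)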
Conventional de-noising methods rely on the assumption that all samples are independent and identically distributed, so the resulting classifier, although disturbed by noise, can still easily identify the noises as outliers of the training distribution. However, this assumption is unrealistic for large-scale data, which is inevitably long-tailed. Such imbalanced training data makes the classifier less discriminative for the tail classes, whose noises now turn into "hard" ones: they are almost as much outliers as the clean tail samples. We introduce this new challenge as Noisy Long-Tailed classification (NLT). Not surprisingly, we find that most de-noising methods fail to identify the hard noises, resulting in significant performance drops on the three proposed NLT benchmarks: ImageNet-NLT, Animal10-NLT, and Food101-NLT. To this end, we design an iterative noisy learning framework called Hard-to-Easy (H2E). Our bootstrapping philosophy is to first learn a classifier as a noise identifier that is invariant to class and context distributional changes, reducing "hard" noises to "easy" ones, whose removal further improves the invariance. Experimental results show that our H2E outperforms state-of-the-art de-noising methods and their ablations on the long-tailed settings while maintaining stable performance on the conventional balanced settings. Datasets and code are available at https://github.com/yxymessi/h2e-framework
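A hedged sketch of the iterative "hard-to-easy" bootstrapping loop described above. Here train_fn trains a (re-balanced, hence approximately class/context invariant) classifier on the kept samples and score_fn returns a per-sample cleanliness score under that classifier; both are caller-supplied placeholders, as are the number of rounds and the keep ratio. None of this is the released H2E code.

def hard_to_easy(samples, train_fn, score_fn, rounds=3, keep_ratio=0.9):
    kept = list(samples)                      # start from the full noisy, long-tailed data
    model = None
    for _ in range(rounds):
        model = train_fn(kept)                # 1) learn an invariant noise identifier
        scores = [score_fn(model, x, y) for x, y in kept]   # 2) "hard" noise now scores low
        order = sorted(range(len(kept)), key=scores.__getitem__, reverse=True)
        kept = [kept[i] for i in order[:int(keep_ratio * len(kept))]]  # 3) drop likely noise
    return model, kept                        # cleaner data -> better invariance next round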
Existing long-tailed classification (LT) methods only focus on tackling the class-wise imbalance, i.e., head classes have more samples than tail classes, but overlook the attribute-wise imbalance. In fact, even if the classes are balanced, the samples within each class may still be long-tailed due to varying attributes. Note that the latter is fundamentally more ubiquitous and challenging than the former, because attributes are not only implicit in most datasets but also combinatorially complex, and thus prohibitively expensive to balance. Therefore, we introduce a novel research problem: Generalized Long-Tailed classification (GLT), which jointly considers both kinds of imbalance. By "generalized", we mean that a GLT method should naturally solve conventional LT, but not vice versa. Not surprisingly, we find that most class-wise LT methods degenerate on our two proposed benchmarks: ImageNet-GLT and MSCOCO-GLT. We argue that this is because they over-emphasize the adjustment of the class distribution while neglecting to learn attribute-invariant features. To this end, we propose an Invariant Feature Learning (IFL) method as the first strong baseline for GLT. IFL first discovers environments with divergent intra-class distributions from the imperfect predictions, and then learns invariant features across them. Promisingly, as an improved feature backbone, IFL boosts the entire LT line-up: one/two-stage re-balancing, augmentation, and ensemble. Code and benchmarks are available on GitHub: https://github.com/kaihuatang/generalized-long-tailed-benchmarks.pytorch
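A hedged sketch of the two-step recipe the abstract outlines: (1) split each class into "environments" using the biased model's own prediction scores, (2) learn features whose per-class risk is similar across those environments. The median split and the variance penalty are illustrative choices, not the released IFL implementation.

import torch

def discover_environments(confidences, labels, split_quantile=0.5):
    # Within every class, low-confidence samples (likely rare attributes) and
    # high-confidence samples (likely frequent attributes) form two environments.
    env = torch.zeros_like(labels)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        thr = confidences[idx].quantile(split_quantile)
        env[idx] = (confidences[idx] > thr).long()
    return env

def invariant_risk(per_sample_losses, env):
    # Encourage the average loss to match across the discovered environments.
    risks = torch.stack([per_sample_losses[env == e].mean() for e in env.unique()])
    return risks.mean() + risks.var(unbiased=False)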
Recently, template-based trackers have become the leading tracking algorithms, showing promising performance in terms of both efficiency and accuracy. However, the correlation operation between the query features and the given template only exploits accurate target localization, leading to state-estimation errors, especially when the target undergoes severe deformation. To address this issue, segmentation-based trackers have been proposed that use per-pixel matching to effectively improve the tracking performance on deformable objects. However, most existing trackers only refer to the target features in the initial frame, and thus lack the discriminative capacity to handle challenging factors such as similar distractors, background clutter, and appearance changes. To this end, we propose a dynamic compact memory embedding to enhance the discrimination of segmentation-based deformable visual tracking. Specifically, we initialize the memory embedding with the target features in the first frame. During tracking, the current target features that have high correlation with the existing memory are updated into the memory embedding online. To further improve the segmentation accuracy for deformable objects, we adopt a point-to-global matching strategy to measure the correlation between the pixel-wise query features and the whole template, so as to capture more detailed deformation information. Extensive evaluations on six challenging tracking benchmarks, including VOT2016, VOT2018, VOT2019, GOT-10K, TrackingNet, and LaSOT, demonstrate the superiority of our method over recent remarkable trackers. In addition, our method outperforms excellent segmentation-based trackers on the DAVIS2017 benchmark.
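An illustrative sketch of the two ideas named above: an online memory embedding updated with features that correlate strongly with the existing memory, and a point-to-global matching that scores every query pixel against the whole memory. The thresholds, tensor shapes, and the momentum-style update are assumptions for exposition only, not the paper's definition.

import torch
import torch.nn.functional as F

class CompactMemory:
    def __init__(self, init_feats, momentum=0.9, corr_thresh=0.7):
        self.mem = F.normalize(init_feats, dim=1)   # (M, C) first-frame target features
        self.momentum = momentum
        self.corr_thresh = corr_thresh

    def update(self, target_feats):
        f = F.normalize(target_feats, dim=1)        # (N, C) current-frame target features
        corr = f @ self.mem.t()                     # (N, M) correlation with the memory
        reliable = corr.max(dim=1).values > self.corr_thresh
        if reliable.any():
            nearest = corr[reliable].argmax(dim=1)  # which memory slot each feature refines
            self.mem[nearest] = F.normalize(
                self.momentum * self.mem[nearest] + (1 - self.momentum) * f[reliable], dim=1)

    def point_to_global(self, query_feats):
        # Each pixel-wise query feature is matched against the WHOLE memory,
        # and the best response serves as its per-pixel matching score.
        q = F.normalize(query_feats, dim=1)         # (HW, C)
        return (q @ self.mem.t()).max(dim=1).values # (HW,)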
As the number of classes grows, maintaining a balanced dataset across many classes is challenging because the data are long-tailed in nature; it is even impossible when the samples of interest co-exist in one collectable unit, e.g., multiple visual instances in one image. Therefore, long-tailed classification is the key to deep learning at scale. However, existing methods are mainly based on re-weighting/re-sampling heuristics that lack a fundamental theory. In this paper, we establish a causal inference framework that not only unravels the whys of previous methods, but also derives a new principled solution. Specifically, our theory shows that the SGD momentum is essentially a confounder in long-tailed classification. On one hand, it has a harmful causal effect that biases the tail prediction towards the head. On the other hand, its induced mediation also benefits representation learning and head prediction. Our framework elegantly disentangles the paradoxical effects of the momentum by pursuing the direct causal effect caused by an input sample. In particular, we use causal intervention in training and counterfactual reasoning in inference to remove the "bad" while keeping the "good". We achieve new state-of-the-art results on three long-tailed visual recognition benchmarks: Long-tailed CIFAR-10/-100 and ImageNet-LT for image classification, and LVIS for instance segmentation.
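A hedged sketch of the "counterfactual reasoning in inference" idea: subtract from the factual logits the logits produced by the part of the feature that merely follows the running (momentum-induced) feature direction. The exact classifier form, the running-mean estimate of that direction, and the trade-off alpha are assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def tde_logits(feature, classifier_weight, running_mean_feat, alpha=1.0):
    # feature: (C,), classifier_weight: (K, C), running_mean_feat: (C,)
    d = F.normalize(running_mean_feat, dim=0)           # momentum-induced direction
    projected = (feature @ d) * d                        # component along that direction
    factual = feature @ classifier_weight.t()            # ordinary logits
    counterfactual = projected @ classifier_weight.t()   # logits from the "bad" component
    return factual - alpha * counterfactual              # keep only the direct ("good") effect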
Today's scene graph generation (SGG) task is still far from practical, mainly due to the severe training bias, e.g., collapsing diverse human walk on / sit on / lay on beach into human on beach. Given such SGG, down-stream tasks such as VQA can hardly infer better scene structures than merely a bag of objects. However, debiasing in SGG is not trivial because traditional debiasing methods cannot distinguish between the good and bad bias, e.g., good context prior (e.g., person read book rather than eat) and bad long-tailed bias (e.g., near dominating behind / in front of). In this paper, we present a novel SGG framework based on causal inference rather than the conventional likelihood. We first build a causal graph for SGG and perform traditional biased training with the graph. Then, we propose to draw the counterfactual causality from the trained graph to infer the effect of the bad bias, which should be removed. In particular, we use Total Direct Effect as the proposed final predicate score for unbiased SGG. Note that our framework is agnostic to the SGG model and can thus be widely applied by anyone in the community who seeks unbiased predictions. Using the proposed Scene Graph Diagnosis toolkit on the SGG benchmark Visual Genome with several prevailing models, we observe significant improvements over the previous state-of-the-art methods.
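A minimal sketch of computing a Total-Direct-Effect style predicate score: run the same relation classifier twice, once with the real pair features and once with a counterfactual input in which the visual content is blanked (e.g., replaced by a mean feature), and keep only the difference. The blanking strategy and the argument names are illustrative assumptions, not the paper's exact formulation.

import torch

def tde_predicate_scores(rel_classifier, pair_feat, context_feat, mean_feat):
    # rel_classifier is any callable that maps (pair features, context) to predicate logits
    factual = rel_classifier(pair_feat, context_feat)                     # biased prediction
    counterfactual = rel_classifier(mean_feat.expand_as(pair_feat), context_feat)
    return factual - counterfactual                                       # total direct effect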
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
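A hedged sketch of "distilling token relations": match the student's token-to-token affinity map to the teacher's with a KL objective, instead of matching CLS tokens or raw features. The temperature and the use of plain token affinities (rather than attention maps from specific heads) are illustrative assumptions.

import torch
import torch.nn.functional as F

def token_relation_distill(student_tokens, teacher_tokens, tau=1.0):
    # tokens: (B, N, C); build pairwise relations among the N patch tokens
    s_rel = student_tokens @ student_tokens.transpose(1, 2) / tau   # (B, N, N)
    t_rel = teacher_tokens @ teacher_tokens.transpose(1, 2) / tau
    return F.kl_div(F.log_softmax(s_rel, dim=-1),
                    F.softmax(t_rel, dim=-1),
                    reduction='batchmean')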
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
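An illustrative sketch of the "implicit alignment" idea: image tokens and point tokens each receive a positional encoding derived from 3D coordinates and are simply concatenated into one token sequence for a transformer decoder. The dimensions and the MLP encodings are assumptions for exposition, not the CMT architecture definition.

import torch
import torch.nn as nn

class MultiModalTokens(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.img_pos = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.pts_pos = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, img_tokens, img_coords3d, pts_tokens, pts_coords3d):
        # img_tokens: (B, Ni, C) with back-projected 3D coords (B, Ni, 3)
        # pts_tokens: (B, Np, C) with voxel/pillar centers      (B, Np, 3)
        img = img_tokens + self.img_pos(img_coords3d)
        pts = pts_tokens + self.pts_pos(pts_coords3d)
        return torch.cat([img, pts], dim=1)          # one shared token sequence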
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
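A toy sketch of the simpler of the two attacks described (NAIVEATTACK): stamp a small trigger patch onto a fraction of the original images and relabel them with the attacker's target class before running dataset distillation, so the backdoor is baked into the synthetic set. The patch size, poison rate, and trigger pattern are placeholders, not the paper's exact configuration.

import torch

def naive_attack(images, labels, target_class, poison_rate=0.1, patch=4):
    # images: (N, C, H, W), labels: (N,)
    n = int(poison_rate * len(images))
    idx = torch.randperm(len(images))[:n]
    poisoned = images.clone()
    poisoned[idx, :, -patch:, -patch:] = 1.0      # white square trigger in the corner
    new_labels = labels.clone()
    new_labels[idx] = target_class
    return poisoned, new_labels                   # this poisoned set is then distilled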
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better handle the regression problem, following the easy-to-hard law of the human learning process. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the experimental results indicate that the performance of PMT-IQA is superior to the comparison approaches, and that both the MS and PMT modules improve the model's performance.
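A hedged sketch of a "progressive multi-task" schedule consistent with the abstract: early in training the loss is dominated by an easier auxiliary task (e.g., coarse quality-level classification) and the weight progressively shifts to the harder quality-score regression. The linear schedule and the choice of auxiliary task are assumptions about PMT-IQA, not its published definition.

def progressive_multitask_loss(reg_loss, aux_cls_loss, epoch, total_epochs):
    # weight moves from 0 to 1 over training: easy task first, hard task later
    w = min(epoch / max(total_epochs - 1, 1), 1.0)
    return (1 - w) * aux_cls_loss + w * reg_loss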